Minimizing The Misclassification Error Rate Using a Surrogate Convex Loss

Authors

  • Shai Ben-David
  • David Loker
  • Nathan Srebro
  • Karthik Sridharan
Abstract

We carefully study how well minimizing convex surrogate loss functions corresponds to minimizing the misclassification error rate for the problem of binary classification with linear predictors. We consider the agnostic setting, and investigate guarantees on the misclassification error of the loss-minimizer in terms of the margin error rate of the best predictor. We show that, aiming for such a guarantee, the hinge loss is essentially optimal among all convex losses.
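
The abstract contrasts the non-convex 0-1 loss with a convex surrogate. As a hedged illustration of that setup (not the paper's analysis), the Python sketch below minimizes the average hinge loss of a linear predictor by subgradient descent on synthetic noisy data, then reports the resulting misclassification error and margin error rate; the data and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a noisy linear rule (the agnostic setting: no
# predictor is perfect, so one compares against the best margin error rate).
n, d = 500, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)
flip = rng.random(n) < 0.05            # 5% label noise
y[flip] = -y[flip]

def zero_one_error(w):
    """Misclassification error rate of the linear predictor sign(<w, x>)."""
    return float((np.sign(X @ w) != y).mean())

def margin_error(w, margin=1.0):
    """Fraction of points x with y <w, x> below the margin."""
    return float((y * (X @ w) < margin).mean())

# Subgradient descent on the average hinge loss max(0, 1 - y <w, x>),
# a convex surrogate for the (non-convex) 0-1 loss.
w = np.zeros(d)
for t in range(1, 2001):
    active = y * (X @ w) < 1.0         # points with nonzero hinge loss
    grad = -((active * y)[:, None] * X).mean(axis=0)
    w -= grad / np.sqrt(t)

print(f"0-1 error of hinge minimizer : {zero_one_error(w):.3f}")
print(f"its margin error rate        : {margin_error(w):.3f}")
```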


Similar Articles

A New Formulation for Cost-Sensitive Two Group Support Vector Machine with Multiple Error Rate

Support vector machine (SVM) is a popular classification technique which classifies data using a max-margin separating hyperplane. The normal vector and bias of this hyperplane are determined by solving a quadratic model, which means that SVM training amounts to solving an optimization problem. Among the extensions of SVM, the cost-sensitive scheme refers to a model with multiple costs which conside...
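
As a hedged sketch of the cost-sensitive idea (not the paper's new formulation, whose details are truncated above), the snippet below trains a soft-margin linear SVM in which the two classes carry different misclassification costs, expressed through scikit-learn's per-class weights; the data and cost values are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Two illustrative Gaussian clusters, labeled -1 and +1.
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.array([-1] * 200 + [+1] * 200)

# Cost-sensitive training: errors on the positive class are penalized
# five times more heavily than errors on the negative class.
clf = LinearSVC(C=1.0, class_weight={-1: 1.0, +1: 5.0})
clf.fit(X, y)

# The separating hyperplane <w, x> + b = 0 found by the underlying
# quadratic optimization problem.
w, b = clf.coef_.ravel(), clf.intercept_[0]
print("normal vector w:", w, " bias b:", b)
```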


Information Divergence Measures and Surrogate Loss Functions

In this extended abstract, we provide an overview of our recent work on the connection between information divergence measures and convex surrogate loss functions used in statistical machine learning. Further details can be found in the technical report [7] and conference paper [6]. The class of f-divergences, introduced independently by Csiszar [4] and Ali and Silvey [1], arises in many areas ...
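
The f-divergence definition underlying this line of work is simple to state. The sketch below (with arbitrarily chosen distributions) evaluates D_f(P||Q) = sum_x q(x) f(p(x)/q(x)) for two standard choices of the convex generator f.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)) for convex f with f(1) = 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

# Two illustrative distributions on a three-element alphabet.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# f(t) = t log t gives KL divergence; f(t) = |t - 1| / 2 gives total variation.
print("KL(P||Q):", f_divergence(p, q, lambda t: t * np.log(t)))
print("TV(P,Q) :", f_divergence(p, q, lambda t: np.abs(t - 1.0) / 2.0))
```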


Classification Methods with Reject Option Based on Convex Risk Minimization

In this paper, we investigate the problem of binary classification with a reject option, in which one can withhold the decision of classifying an observation at a cost lower than that of misclassification. Since the natural loss function is non-convex, empirical risk minimization easily becomes infeasible; the paper therefore proposes minimizing convex risks based on surrogate convex loss functions...
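
As a minimal sketch of the decision rule this setting leads to (Chow's classical reject rule, not the paper's surrogate construction): with rejection cost d < 1/2 and a class-probability estimate eta(x) = P(y = +1 | x), one rejects exactly when both classes are too uncertain. The estimate eta is assumed to be given.

```python
def classify_with_reject(eta, d=0.2):
    """Chow's rule: return +1, -1, or 'reject' for rejection cost d < 1/2.

    eta estimates P(y = +1 | x); rejecting costs d, a misclassification costs 1.
    """
    if min(eta, 1.0 - eta) > d:       # conditional risk of guessing exceeds d
        return "reject"
    return +1 if eta >= 0.5 else -1

for eta in (0.05, 0.45, 0.55, 0.95):
    print(f"eta = {eta:.2f} -> {classify_with_reject(eta)}")
```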


Optimizing the Classification Cost using SVMs with a Double Hinge Loss

The objective of this study is to minimize the classification cost using a Support Vector Machine (SVM) classifier with a double hinge loss. Such binary classifiers have the option to reject observations when the cost of rejection is lower than that of misclassification. To train this classifier, the standard SVM optimization problem is modified by minimizing a double hinge loss function consi...
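
As a hedged illustration of what a double hinge loss can look like (the paper's exact parameterization is cut off above), the sketch below uses the generalized hinge of Bartlett and Wegkamp for rejection cost d: convex and piecewise linear, with hinges at t = 0 and t = 1 and a steeper slope for negative scores.

```python
import numpy as np

def double_hinge(t, d=0.2):
    """Convex piecewise-linear loss with two hinges (at t = 0 and t = 1).

    For t >= 1 the loss is 0; on [0, 1] it falls with slope -1; for t < 0
    it rises with the steeper slope (1 - d)/d, where d < 1/2 is the
    rejection cost.
    """
    t = np.asarray(t, dtype=float)
    a = (1.0 - d) / d                       # steeper slope for t < 0
    return np.maximum(0.0, np.maximum(1.0 - t, 1.0 - a * t))

print(double_hinge([-1.0, 0.0, 0.5, 1.0, 2.0]))  # [5.  1.  0.5 0.  0. ]
```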


Composite Binary Losses

We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduc...
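
As a small sketch of the composite structure this abstract describes (a proper loss composed with a link), the snippet below composes log loss with the inverse logit link, recovering the familiar margin form of the logistic loss; the function names are illustrative.

```python
import math

def log_loss(y, p):
    """A proper loss for class-probability estimates p of P(y = +1)."""
    return -math.log(p) if y == +1 else -math.log(1.0 - p)

def inv_logit(v):
    """Inverse of the logit link: map a real-valued score v to a probability."""
    return 1.0 / (1.0 + math.exp(-v))

def composite_loss(y, v):
    """Proper loss composed with the inverse link: l(y, psi^{-1}(v))."""
    return log_loss(y, inv_logit(v))

# The composition equals the margin-based logistic loss log(1 + exp(-y v)).
y, v = +1, 0.7
print(composite_loss(y, v), math.log(1.0 + math.exp(-y * v)))
```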



Journal:

Volume:   Issue:

Pages:   -

Publication date: 2012